Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates
Authors
Abstract
Similar articles
Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectatio...
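The randomized iterated projection scheme referenced here is the randomized Kaczmarz method of Strohmer and Vershynin, in which each step projects the current iterate onto a single hyperplane of Ax = b chosen with probability proportional to its squared row norm. A minimal sketch in Python (function and variable names are illustrative, not taken from the paper):

```python
# A minimal sketch of the Strohmer-Vershynin randomized Kaczmarz iteration
# (randomized iterated projection for Ax = b); names are illustrative.
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()  # rows sampled proportionally to ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        # Orthogonal projection of x onto the hyperplane {z : <a_i, z> = b_i}.
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i
    return x

# Usage on a small consistent system:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b, iters=5000)
print(np.linalg.norm(x_hat - x_true))
```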
Hilbertian Convex Feasibility Problem: Convergence of Projection Methods
The classical problem of finding a point in the intersection of countably many closed and convex sets in a Hilbert space is considered. Extrapolated iterations of convex combinations of approximate projections onto subfamilies of sets are investigated to solve this problem. General hypotheses are made on the regularity of the sets and various strategies are considered to control the order in wh...
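The basic, non-extrapolated variant of such schemes is the method of cyclic projections onto closed convex sets. A minimal sketch, assuming a simple two-set instance (a ball and a half-space) chosen purely for illustration; the extrapolation and approximate-projection strategies of the paper are not shown:

```python
# A minimal sketch of cyclic projections (POCS) for finding a point in the
# intersection of closed convex sets; the ball and half-space are illustrative.
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm

def project_halfspace(x, a, beta):
    # Projection onto {z : <a, z> <= beta}.
    viol = a @ x - beta
    return x if viol <= 0 else x - viol / (a @ a) * a

def cyclic_projections(x0, projections, sweeps=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for P in projections:  # one sweep applies each projection in turn
            x = P(x)
    return x

# Usage: find a point in the intersection of the unit ball and a half-space.
projs = [lambda x: project_ball(x, np.zeros(3), 1.0),
         lambda x: project_halfspace(x, np.array([1.0, 1.0, 0.0]), 0.5)]
x_star = cyclic_projections(np.array([3.0, 2.0, -1.0]), projs)
print(x_star)
```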
Convex feasibility modeling and projection methods for sparse signal recovery
A computationally-efficient method for recovering sparse signals from a series of noisy observations, known as the problem of compressed sensing (CS), is presented. The theory of CS usually leads to a constrained convex minimization problem. In this work, an alternative outlook is proposed. Instead of solving the CS problem as an optimization problem, it is suggested to transform the optimizati...
How good are projection methods for convex feasibility problems?
We consider simple projection methods for solving convex feasibility problems. Both successive and sequential methods are considered, and heuristics to improve these are suggested. Unfortunately, particularly given the large literature which might make one think otherwise, numerical tests indicate that in general none of the variants considered are especially effective or competitive with more ...
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
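The underlying iteration is the proximal-gradient step: a gradient step on the smooth term followed by the proximity operator of the non-smooth term. A minimal sketch of the exact (error-free) case, using the lasso as an illustrative smooth-plus-nonsmooth objective, not taken from the paper:

```python
# A minimal sketch of the proximal-gradient (ISTA) iteration for
# min_x 0.5*||Ax - b||^2 + lam*||x||_1; the lasso instance is illustrative.
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Usage on a small sparse-recovery instance:
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[:5] = 1.0
b = A @ x_true
print(np.count_nonzero(np.abs(proximal_gradient(A, b, lam=0.1)) > 1e-3))
```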
Journal
Journal title: SIAM Journal on Optimization
Year: 2019
ISSN: 1052-6234,1095-7189
DOI: 10.1137/18m1167061